Bounds on the Trained Vector Quantizer Distortion Measured Using Training Data

Authors

  • Dong Sik Kim
  • Mark R. Bell
Abstract

Quantization can effectively reduce a large amount of data at the cost of a possibly small error (called the quantization error). When a quantizer is designed using a portion of the data as training data, the training algorithm tries to find a codebook that minimizes the quantization error measured on the training data. It is known that, under several conditions, the minimized quantization error approaches the optimal error for the underlying distribution of the training data as the training data size increases. In this report, an upper bound on the minimized quantization error from the training data is derived as a function of the ratio of the training data size to the codebook size. This bound enables us to observe the convergence behavior of trained quantizers as the training data size increases.
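To make the setting concrete, here is a minimal sketch (not from the report) of empirical quantizer design under squared-error distortion: a codebook of size m is trained on n = beta*m samples with Lloyd-style iterations, and its distortion is measured both on the training data and on a large held-out sample. The Gaussian source and the helper names train_codebook and distortion are illustrative assumptions; as beta grows, the training distortion typically climbs toward the held-out distortion, which is the convergence behavior the derived bound describes.

```python
import numpy as np

def train_codebook(train, m, iters=50, seed=0):
    """Lloyd-style design: alternate nearest-codeword assignment and
    centroid updates to (locally) minimize training squared error."""
    rng = np.random.default_rng(seed)
    codebook = train[rng.choice(len(train), size=m, replace=False)]
    for _ in range(iters):
        # Assign each training vector to its nearest codeword.
        d2 = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Move each codeword to the centroid of its cell (skip empty cells).
        for j in range(m):
            cell = train[labels == j]
            if len(cell):
                codebook[j] = cell.mean(axis=0)
    return codebook

def distortion(data, codebook):
    """Average squared error of nearest-codeword quantization."""
    d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()

k, m = 2, 64                                 # vector dimension, codebook size
rng = np.random.default_rng(1)
held_out = rng.standard_normal((50_000, k))  # stand-in for the true source
for beta in (1, 2, 4, 8, 16, 32):            # training-size-to-codebook-size ratio
    train = rng.standard_normal((beta * m, k))
    cb = train_codebook(train, m)
    print(f"beta={beta:3d}  training={distortion(train, cb):.4f}  "
          f"held-out={distortion(held_out, cb):.4f}")
```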


Related Papers

Theory and Practice of Vector Quantizers Trained on Small Training Sets

We examine how the performance of a memoryless vector quantizer changes as a function of its training set size. Specifically, we study how well the training set distortion predicts test distortion when the training set is a randomly drawn subset of blocks from the test or training image(s). Using the Vapnik-Chervonenkis dimension, we derive formal bounds for the difference of test and training di...


On the training distortion of vector quantizers

The in-training-set performance of a vector quantizer as a function of its training set size is investigated. For squared-error distortion and independent training data, worst-case type upper bounds are derived on the minimum training distortion achieved by an empirically optimal quantizer. These bounds show that the training distortion can underestimate the minimum distortion of a truly optima...



The Minimax Distortion Redundancy in Empirical Quantizer Design

We obtain minimax lower and upper bounds for the expected distortion redundancy of empirically designed vector quantizers. We show that the mean-squared distortion of a vector quantizer designed from n i.i.d. data points using any design algorithm is at least of order n^{-1/2} away from the optimal distortion for some distribution on a bounded subset of R^d. Together with existing upper bounds this result s...
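In symbols, and with notation assumed here rather than taken from the paper (D for mean-squared distortion, D^* for the optimal distortion at the same codebook size), the lower bound can be read roughly as:

```latex
% For any design algorithm mapping n i.i.d. samples to a quantizer Q_n,
% there is a source distribution \mu on a bounded subset of R^d for which
% the expected distortion redundancy decays no faster than n^{-1/2}:
\[
  \sup_{\mu}\,\bigl(\mathbb{E}\,D(Q_n) - D^{*}(\mu)\bigr) \;\ge\; c\,n^{-1/2},
\]
% where D(Q) = \mathbb{E}\lVert X - Q(X)\rVert^{2} and c > 0 is a constant.
```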


Rates of convergence in the source coding theorem, in empirical quantizer design, and in universal lossy source coding

Rate-of-convergence results are established for vector quantization. Convergence rates are given for an increasing vector dimension and/or an increasing training set size. In particular, the following results are shown for memoryless real-valued sources with bounded support at transmission rate R: (1) If a vector quantizer with fixed dimension k is designed to minimize the empirical mea...




Publication date: 2013